


7a62d9a4c03377d1175b8859b4cc16d4-Supplemental-Conference.pdf

Neural Information Processing Systems

The original bandits with knapsacks problem assumes that when a resource's budget is exhausted, the process of learning and gaining rewards ceases. The key distinction between that model and ours is that we instead assume the learner is allowed to remain idle until the supply of every resource becomes positive again, at which point the learning process recommences.



A Polynomial-Time Approximation for Pairwise Fair $k$-Median Clustering

Bandyapadhyay, Sayan, Chlamtáč, Eden, Makarychev, Yury, Vakilian, Ali

arXiv.org Artificial Intelligence

Clustering is a fundamental task in theoretical computer science and machine learning aimed at dividing a set of data items into several groups or clusters, such that each group contains similar data items. Typically, the similarity between data items is measured using a metric distance function. Clustering is often modeled as an optimization problem where the objective is to minimize a global cost function that reflects the quality of the clusters; this function varies depending on the application. Among the many cost functions studied for clustering, the most popular are k-median, k-means, and k-center. These objectives generally aim to minimize the variance within the clusters, serving as a proxy for grouping similar data items. In this work, we study clustering problems with fairness constraints, commonly known as fair clustering problems. Fair clustering emerged as one of the most active research areas in algorithms, motivated by the recent trend of research on fairness in artificial intelligence. In a seminal work, Chierichetti et al. [18] introduced a fair clustering problem, where given a set R of red points, a set B of blue points, and an integer balance parameter t ≥ 1, a clustering is said to be balanced if, in every cluster, the number of red points is at least 1/t times the number of blue points and at most t times the number of blue points.
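The balance condition of Chierichetti et al. can be checked per cluster in a few lines. This is an illustrative sketch (the function name and integer-count interface are not from the paper); cross-multiplication avoids floating-point division:

```python
def cluster_is_balanced(num_red, num_blue, t):
    # Balanced (Chierichetti et al.): #red is at least (1/t) * #blue
    # and at most t * #blue. Equivalently, after cross-multiplying by t:
    return num_blue <= t * num_red and num_red <= t * num_blue
```

A clustering is balanced exactly when this predicate holds for every cluster.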


692f93be8c7a41525c0baf2076aecfb4-Reviews.html

Neural Information Processing Systems

This paper proposes an approach to exact energy minimization in discrete graphical models. The key idea is as follows: the LP relaxation of the problem makes it possible to identify, via arc consistency/weak tree agreement, nodes for which the LP solution is also optimal in the discrete sense. The nodes for which discrete optimality cannot be established from the solution of the LP then define a subproblem, a hopefully small graph, which is solved exactly using a combinatorial solver. One of the contributions of the paper is to show that, if the combinatorial solution's boundary is consistent with the optimal part of the LP solution, the global optimum has been established. If the condition is not met, the combinatorial search area must be grown by the set of variables for which boundary consistency does not hold.
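The partition step described above can be sketched very simply. Note this is a stand-in: the paper's actual persistency criterion uses arc consistency/weak tree agreement, whereas this toy version merely splits variables by whether their LP value is integral; the function name is illustrative:

```python
def split_persistent(lp_solution, tol=1e-9):
    """Partition variables into those the LP relaxation decides
    integrally (candidates for persistency) and the fractional
    remainder to hand to an exact combinatorial solver."""
    decided, open_vars = {}, []
    for var, val in lp_solution.items():
        if abs(val - round(val)) <= tol:
            decided[var] = int(round(val))
        else:
            open_vars.append(var)
    return decided, open_vars
```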


Cutting Plane Selection with Analytic Centers and Multiregression

Turner, Mark, Berthold, Timo, Besançon, Mathieu, Koch, Thorsten

arXiv.org Artificial Intelligence

Cutting planes are a crucial component of state-of-the-art mixed-integer programming solvers, with the choice of which subset of cuts to add being vital for solver performance. We propose new distance-based measures to qualify the value of a cut by quantifying the extent to which it separates relevant parts of the relaxed feasible set. For this purpose, we use the analytic centers of the relaxation polytope or of its optimal face, as well as alternative optimal solutions of the linear programming relaxation. We assess the impact of the choice of distance measure on root node performance and throughout the whole branch-and-bound tree, comparing our measures against those prevalent in the literature. Finally, by a multi-output regression, we predict the relative performance of each measure, using static features readily available before the separation process. Our results indicate that analytic center-based methods help to significantly reduce the number of branch-and-bound nodes needed to explore the search space and that our multiregression approach can further improve on any individual method.
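The distance-based measures in question reduce, in the simplest case, to the Euclidean distance from a reference point (for instance an analytic center or an alternative LP optimum) to the cut's hyperplane. A minimal sketch, assuming a cut of the form a^T x <= b in NumPy vectors (the function name is illustrative):

```python
import numpy as np

def cut_distance(a, b, x):
    """Euclidean distance from the reference point x to the
    hyperplane a @ x == b of the cut a @ x <= b. A larger value
    for a point cut off by the inequality means the cut
    separates it more deeply from the relaxed feasible set."""
    return abs(a @ x - b) / np.linalg.norm(a)
```

With the analytic center as the reference point, cuts can then be ranked by this score during separation.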


Non-monotonic Resource Utilization in the Bandits with Knapsacks Problem

Kumar, Raunak, Kleinberg, Robert

arXiv.org Artificial Intelligence

Bandits with knapsacks (BwK) is an influential model of sequential decision-making under uncertainty that incorporates resource consumption constraints. In each round, the decision-maker observes an outcome consisting of a reward and a vector of nonnegative resource consumptions, and the budget of each resource is decremented by its consumption. In this paper we introduce a natural generalization of the stochastic BwK problem that allows non-monotonic resource utilization. In each round, the decision-maker observes an outcome consisting of a reward and a vector of resource drifts that can be positive, negative or zero, and the budget of each resource is incremented by its drift. Our main result is a Markov decision process (MDP) policy that has constant regret against a linear programming (LP) relaxation when the decision-maker knows the true outcome distributions. We build upon this to develop a learning algorithm that has logarithmic regret against the same LP relaxation when the decision-maker does not know the true outcome distributions. We also present a reduction from BwK to our model that shows our regret bound matches existing results.
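The budget dynamics described above can be illustrated with a toy simulation. This is not the authors' algorithm: the outcome sampler and the assumption of a +1 exogenous replenishment per resource during idle rounds are hypothetical, purely for illustration of signed drifts and idling:

```python
def run_rounds(init_budgets, outcome, horizon):
    """Toy dynamics for BwK with non-monotonic resource utilization.

    Each active round yields (reward, drifts); every budget is
    incremented by its signed drift. While any budget is non-positive,
    the learner idles and earns nothing."""
    budgets = list(init_budgets)
    total_reward = 0.0
    for _ in range(horizon):
        if all(b > 0 for b in budgets):
            reward, drifts = outcome()
            total_reward += reward
            budgets = [b + d for b, d in zip(budgets, drifts)]
        else:
            # Idle round: assumed +1 replenishment per resource (illustrative).
            budgets = [b + 1 for b in budgets]
    return total_reward, budgets
```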


Approximation Algorithms for Socially Fair Clustering

Makarychev, Yury, Vakilian, Ali

arXiv.org Machine Learning

We present an $(e^{O(p)} \frac{\log \ell}{\log\log\ell})$-approximation algorithm for socially fair clustering with the $\ell_p$-objective. In this problem, we are given a set of points in a metric space. Each point belongs to one (or several) of $\ell$ groups. The goal is to find a $k$-medians, $k$-means, or, more generally, $\ell_p$-clustering that is simultaneously good for all of the groups. More precisely, we need to find a set of $k$ centers $C$ so as to minimize the maximum over all groups $j$ of $\sum_{u \text{ in group }j} d(u,C)^p$. The socially fair clustering problem was independently proposed by Abbasi, Bhaskara, and Venkatasubramanian [2021] and Ghadiri, Samadi, and Vempala [2021]. Our algorithm improves and generalizes their $O(\ell)$-approximation algorithms for the problem. The natural LP relaxation for the problem has an integrality gap of $\Omega(\ell)$. In order to obtain our result, we introduce a strengthened LP relaxation and show that it has an integrality gap of $\Theta(\frac{\log \ell}{\log\log\ell})$ for a fixed $p$. Additionally, we present a bicriteria approximation algorithm, which generalizes the bicriteria approximation of Abbasi et al. [2021].
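The min-max objective above transcribes directly into code. A minimal sketch in NumPy, assuming points and centers are given as coordinate tuples and groups as labels (the function name is illustrative):

```python
import numpy as np

def socially_fair_cost(points, groups, centers, p):
    """max over groups j of sum_{u in group j} d(u, C)^p,
    where d(u, C) is the distance from u to its nearest center in C."""
    per_group = {}
    for u, g in zip(points, groups):
        d = min(np.linalg.norm(np.asarray(u) - np.asarray(c)) for c in centers)
        per_group[g] = per_group.get(g, 0.0) + d ** p
    return max(per_group.values())
```

Socially fair clustering seeks the k-center set C minimizing this quantity, so that no single group bears a disproportionate share of the clustering cost.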


Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

Lang, Hunter, Reddy, Aravind, Sontag, David, Vijayaraghavan, Aravindan

arXiv.org Machine Learning

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This "noisy stable" model realistically fits with practical MAP inference problems: we design an algorithm for finding "close" stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.


Optimality of Approximate Inference Algorithms on Stable Instances

Lang, Hunter, Sontag, David, Vijayaraghavan, Aravindan

arXiv.org Artificial Intelligence

Approximate algorithms for structured prediction problems---such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001)---typically far exceed their theoretical performance guarantees on real-world instances. These algorithms often find solutions that are very close to optimal. The goal of this paper is to partially explain the performance of alpha-expansion and an LP relaxation algorithm on MAP inference in Ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical results complement numerous empirical observations of good performance.
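For concreteness, the objective these algorithms minimize in a ferromagnetic Potts model is a sum of unary costs plus a nonnegative penalty for each edge whose endpoints disagree. A minimal sketch (data layout and function name are illustrative):

```python
def potts_energy(labels, unary, edges):
    """Energy of a labeling in a ferromagnetic Potts model:
    unary[u][k] is the cost of assigning label k to node u, and
    each edge (u, v, w) with w >= 0 pays w iff its endpoints
    disagree. MAP inference seeks the minimizing labeling."""
    energy = sum(unary[u][labels[u]] for u in range(len(labels)))
    energy += sum(w for (u, v, w) in edges if labels[u] != labels[v])
    return energy
```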